Noise image segmentation by adaptive wavelet transform based on artificial bee swarm and fuzzy C-means
SHI Xuesong, LI Xianhua, SUN Qing, SONG Tao
Journal of Computer Applications    2021, 41 (8): 2312-2317.   DOI: 10.11772/j.issn.1001-9081.2020101684
Aiming at the problem that the traditional Fuzzy C-Means (FCM) clustering algorithm is easily affected by noise when processing noisy images, a noise image segmentation method with wavelet-domain feature enhancement based on FCM was proposed. Firstly, the noise image was decomposed by a two-dimensional wavelet transform. Secondly, edge enhancement was applied to the approximation coefficients, the Artificial Bee Colony (ABC) optimization algorithm was used to threshold the detail coefficients, and wavelet reconstruction was carried out on the processed coefficients. Finally, the reconstructed image was segmented by the FCM algorithm. Five typical grayscale images, with Gaussian noise and salt-and-pepper noise added respectively, were segmented by various methods, with the Peak Signal-to-Noise Ratio (PSNR) and Misclassification Error (ME) of the segmented images taken as performance indicators. Experimental results show that the PSNR of images segmented by the proposed method is up to 281% and 54% higher than that of the traditional FCM clustering segmentation method and the Particle Swarm Optimization (PSO) segmentation method respectively, and the ME of the proposed method's segmentations is up to 55% and 41% lower than those of the two comparison methods. The proposed method preserves edge and texture information well, and its noise robustness and segmentation performance are improved.
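A minimal Python sketch of this pipeline: 2-D wavelet decomposition, detail-coefficient thresholding, reconstruction, then FCM segmentation. The ABC-searched threshold and the edge enhancement of the approximation coefficients are simplified away here; the fixed universal threshold, wavelet choice, and image are illustrative assumptions.

```python
import numpy as np
import pywt

def denoise_wavelet(img, wavelet="db2"):
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    # universal threshold stands in for the ABC-optimized one (assumption)
    sigma = np.median(np.abs(cD)) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))
    cH, cV, cD = (pywt.threshold(c, t, mode="soft") for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

def fcm(pixels, c=2, m=2.0, iters=50):
    # pixels: (N, 1) gray levels; returns hard cluster labels
    u = np.random.dirichlet(np.ones(c), size=len(pixels))      # memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        d = np.abs(pixels - centers.T) + 1e-12                 # (N, c)
        inv = d ** (-2 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)               # standard FCM update
    return u.argmax(axis=1)

img = np.random.rand(64, 64)                                   # stand-in noisy image
rec = denoise_wavelet(img)
labels = fcm(rec.reshape(-1, 1)).reshape(rec.shape)
```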
Application of Transformer optimized by pointer generator network and coverage loss in field of abstractive text summarization
LI Xiang, WANG Weibing, SHANG Xueda
Journal of Computer Applications    2021, 41 (6): 1647-1651.   DOI: 10.11772/j.issn.1001-9081.2020091375
For the application scenario of abstractive text summarization, a Transformer-based summarization model augmented with a Pointer Generator network and Coverage Loss was proposed. First, the Transformer model was used as the basic structure, and its attention mechanism was applied to better capture the semantic information of the context. Then, Coverage Loss was introduced into the model's loss function to penalize repeatedly attending to, and thus repeatedly generating, the same words, addressing the tendency of the Transformer's attention mechanism to keep generating the same word in abstractive tasks. Finally, the Pointer Generator network was added, allowing the model to copy words from the source text into the generated summary and thereby handle the Out-Of-Vocabulary (OOV) problem. Experiments examined whether the improved model reduced inaccurate expressions and whether the repeated-word phenomenon was resolved. Compared with the original Transformer model, the improved model raised the ROUGE-1 score by 1.98 percentage points, the ROUGE-2 score by 0.95 percentage points, and the ROUGE-L score by 2.27 percentage points, improving the readability and accuracy of the summaries. Experimental results show that the Transformer can be applied to abstractive text summarization once Coverage Loss and the Pointer Generator network are added.
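A hedged PyTorch sketch of the two additions. `attn` is the decoder's attention distribution over source positions at each step; names, shapes, and the demo tensors are illustrative, not the paper's implementation.

```python
import torch

def coverage_loss(attn):
    # attn: (steps, src_len). Coverage is the running sum of past attention;
    # penalizing min(attn, coverage) discourages re-attending to the same token.
    coverage = torch.zeros_like(attn[0])
    loss = 0.0
    for a in attn:
        loss = loss + torch.minimum(a, coverage).sum()
        coverage = coverage + a
    return loss / attn.size(0)

def pointer_mixture(p_vocab, attn_step, src_ids, p_gen):
    # final distribution = p_gen * generator + (1 - p_gen) * copy-from-source
    out = p_gen * p_vocab
    return out.scatter_add(0, src_ids, (1 - p_gen) * attn_step)

attn = torch.softmax(torch.randn(5, 7), dim=-1)    # 5 decode steps, 7 src tokens
p_vocab = torch.softmax(torch.randn(20), dim=-1)   # toy 20-word vocabulary
src_ids = torch.randint(0, 20, (7,))               # vocab id of each src token
print(coverage_loss(attn))
print(pointer_mixture(p_vocab, attn[0], src_ids, p_gen=0.8))
```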
Collaborative filtering method fusing overlapping community regularization and implicit feedback
LI Xiangkun, JIA Caiyan
Journal of Computer Applications    2021, 41 (1): 53-59.   DOI: 10.11772/j.issn.1001-9081.2020060995
Aiming at the problems of data sparsity and cold start in current recommender systems, a collaborative filtering method fusing Overlapping Community Regularization and Implicit Feedback (OCRIF) was proposed, which not only considers the community structure of users in the social network but also integrates the implicit feedback of user rating information and social information into the recommendation model. In addition, since network representation learning can effectively capture the neighbor information of nodes within the global structure of a social network, a network representation learning enhanced OCRIF (OCRIF+) was proposed, which combines the low-dimensional representations of users in the social network with user-item features and can represent the similarity between users and the membership degrees of users to interest communities more effectively. Experimental results on multiple real datasets show that the proposed method outperforms similar methods in recommendation effect. Compared with the TrustSVD (trust-based Singular Value Decomposition) method, the proposed method has the Root Mean Square Error (RMSE) decreased by 2.74%, 2.55% and 1.83%, and the Mean Absolute Error (MAE) decreased by 3.47%, 2.97% and 2.40%, on the FilmTrust, DouBan and Ciao datasets respectively.
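A minimal numpy sketch of the underlying idea: matrix factorization whose loss adds a social term pulling each user's latent factors toward those of community neighbors. The plain SGD step and the simple neighbor-mean regularizer are illustrative assumptions, not the OCRIF formulation.

```python
import numpy as np

def sgd_step(P, Q, u, i, r, neighbors, lr=0.01, lam=0.05, beta=0.1):
    # P: user factors, Q: item factors; r: observed rating of item i by user u
    err = r - P[u] @ Q[i]
    social = P[u] - P[neighbors].mean(axis=0) if len(neighbors) else 0.0
    P[u] += lr * (err * Q[i] - lam * P[u] - beta * social)   # community pull
    Q[i] += lr * (err * P[u] - lam * Q[i])

rng = np.random.default_rng(0)
P = rng.normal(size=(4, 8)) * 0.1                # 4 users, 8 latent dims
Q = rng.normal(size=(5, 8)) * 0.1                # 5 items
sgd_step(P, Q, u=0, i=2, r=4.0, neighbors=[1, 3])  # user 0's community: users 1, 3
```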
News named entity recognition and sentiment classification based on attention-based bi-directional long short-term memory neural network and conditional random field
HU Tiantian, DAN Yabo, HU Jie, LI Xiang, LI Shaobo
Journal of Computer Applications    2020, 40 (7): 1879-1883.   DOI: 10.11772/j.issn.1001-9081.2019111965
An Attention-based Bi-directional Long Short-Term Memory neural network and Conditional Random Field (AttBi-LSTM-CRF) model was proposed for the core entity recognition and core entity sentiment analysis task on the Sohu coreEntityEmotion_train corpus. Firstly, the text was pre-trained, and each word was mapped into a low-dimensional vector of the same dimension. Then, these vectors were input into the Attention-based Bi-directional Long Short-Term Memory network (AttBi-LSTM) to obtain long-range context information and focus on the information highly related to the output labels. Finally, the optimal label sequence was obtained through the Conditional Random Field (CRF) layer. Comparison experiments were conducted among the AttBi-LSTM-CRF model, the Bi-directional Long Short-Term Memory network (Bi-LSTM), AttBi-LSTM, and the Bi-LSTM with Conditional Random Field (Bi-LSTM-CRF) model. The experimental results show that the AttBi-LSTM-CRF model achieves an accuracy of 0.78, a recall of 0.667, and an F1 value of 0.553, all better than those of the comparison models, verifying the superiority of AttBi-LSTM-CRF.
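A compact PyTorch sketch of this architecture family: BiLSTM encoder, additive self-attention re-weighting, linear emission layer, CRF decoding. It assumes the third-party pytorch-crf package for the CRF layer; all dimensions and the attention form are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf (assumed dependency)

class AttBiLSTMCRF(nn.Module):
    def __init__(self, vocab, emb=100, hid=128, tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hid, 1)
        self.out = nn.Linear(2 * hid, tags)
        self.crf = CRF(tags, batch_first=True)

    def emissions(self, x):
        h, _ = self.lstm(self.emb(x))             # (B, T, 2*hid)
        w = torch.softmax(self.att(h), dim=1)     # attention weights over time
        return self.out(h * (1 + w))              # emphasize attended steps

    def loss(self, x, y):
        return -self.crf(self.emissions(x), y)    # negative log-likelihood

    def decode(self, x):
        return self.crf.decode(self.emissions(x)) # best tag sequence per sample
```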
Segmentation of nasopharyngeal neoplasms based on random forest feature selection algorithm
LI Xian, WANG Yan, LUO Yong, ZHOU Jiliu
Journal of Computer Applications    2019, 39 (5): 1485-1489.   DOI: 10.11772/j.issn.1001-9081.2018102205
Due to the low gray-level contrast and blurred organ boundaries in medical images, a Random Forest (RF) feature selection algorithm was proposed to segment nasopharyngeal neoplasm MR images. Firstly, gray-level, texture and geometry information was extracted from the nasopharyngeal neoplasm images to construct a random forest classifier. Then, feature importances were measured by the random forest, and the proposed feature selection method was applied to the original handcrafted feature set. Finally, the optimal feature subset obtained from the feature selection process was used to construct a new random forest classifier for the final segmentation of the images. Experimental results show that the proposed algorithm achieves a Dice coefficient of 79.197%, an accuracy of 97.702%, a sensitivity of 72.191%, and a specificity of 99.502%. Comparison with conventional random forest based and Deep Convolutional Neural Network (DCNN) based segmentation algorithms shows clearly that the proposed feature selection algorithm can effectively extract useful information from nasopharyngeal neoplasm MR images and improve segmentation accuracy under small-sample circumstances.
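A minimal scikit-learn sketch of the two-stage scheme: rank features with a first forest, keep a top subset, retrain a second forest on it. The synthetic feature matrix and the 50% keep-ratio are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 40)          # stand-in gray/texture/geometry features
y = np.random.randint(0, 2, 200)     # stand-in voxel labels (tumor / background)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[::-1][: X.shape[1] // 2]

rf2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
pred = rf2.predict(X[:, keep])       # final per-voxel segmentation labels
```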
Cardiac arrhythmia detection algorithm based on deep long short-term memory neural network model
YANG Shuo, PU Baoming, LI Xiangze, WANG Shuai, CHANG Zhanguo
Journal of Computer Applications    2019, 39 (3): 930-934.   DOI: 10.11772/j.issn.1001-9081.2018081677
Aiming at the problems of inaccurate feature extraction and high complexity in traditional ElectroCardioGram (ECG) detection algorithms based on morphological features, an improved Long Short-Term Memory (LSTM) neural network was proposed. Building on the advantage of the traditional LSTM model in time-series processing, the proposed model added reverse and depth computations, which avoid manual extraction of waveform features and strengthen the learning ability of the network. Supervised learning was then performed on given heartbeat sequences and category labels, realizing arrhythmia detection for unknown heartbeats. Experimental results on the arrhythmia datasets in the MIT-BIH database show that the overall accuracy of the proposed method reaches 98.34%, and that compared with a support vector machine, both the accuracy and the F1 value of the model are improved.
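A hedged PyTorch sketch of this model family: a stacked ("depth") bidirectional ("reverse") LSTM over raw beat sequences, trained with category labels. Layer sizes, beat length, and the five classes are assumptions.

```python
import torch
import torch.nn as nn

class DeepBiLSTM(nn.Module):
    def __init__(self, n_classes=5, hid=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hid, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hid, n_classes)

    def forward(self, x):              # x: (batch, beat_len, 1) raw samples
        h, _ = self.lstm(x)
        return self.fc(h[:, -1])       # classify from the final time step

model = DeepBiLSTM()
beats = torch.randn(8, 250, 1)         # 8 beats of 250 samples (illustrative)
logits = model(beats)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (8,)))
```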
Sparse non-negative matrix factorization based on kernel and hypergraph regularization
YU Jianglan, LI Xiangli, ZHAO Pengfei
Journal of Computer Applications    2019, 39 (3): 742-749.   DOI: 10.11772/j.issn.1001-9081.2018071617
Focusing on the problem that when traditional Non-negative Matrix Factorization (NMF) is applied to clustering, robustness and sparsity are not considered at the same time, leading to low clustering performance, a sparse Non-negative Matrix Factorization algorithm based on Kernel techniques and HyperGraph regularization (KHGNMF) was proposed. Firstly, while inheriting the good performance of kernel techniques, the L2,1-norm was used to improve the F-norm of standard NMF, and hypergraph regularization terms were added to preserve the inherent geometric structure information of the original data as much as possible. Secondly, the L2,1/2 pseudo-norm and L1/2 regularization terms were merged into the NMF model as sparsity constraints. Finally, the new algorithm was applied to image clustering. Experimental results on six standard datasets show that KHGNMF improves clustering performance (accuracy and normalized mutual information) by 39% to 54% compared with nonlinear orthogonal graph regularized non-negative matrix factorization, and that the sparsity and robustness of the proposed algorithm are increased and the clustering effect is improved.
Imperialist competitive algorithm based on multiple search strategy for solving traveling salesman problem
CHEN Menghui, LIU Junlin, XU Jianfeng, LI Xiangjun
Journal of Computer Applications    2019, 39 (10): 2992-2996.   DOI: 10.11772/j.issn.1001-9081.2019030434
The imperialist competitive algorithm is a swarm intelligence optimization algorithm with strong local search ability, but excessive local search leads to loss of diversity and falling into local optima. Aiming at this problem, an Imperialist Competitive Algorithm based on a Multiple Search Strategy (MSSICA) was proposed. A country was defined as a feasible solution, and kingdoms were defined as four mechanisms of combinatorial artificial chromosomes with different characteristics. A block mechanism was used to retain dominant solution fragments during the search, and differentiated combinatorial-artificial-chromosome mechanisms were used by different empires to search the feasible-solution information of different solution spaces. When the search falls into a local optimum, the multiple search strategy injects a uniformly distributed feasible solution to replace a less advantageous one, enhancing diversity. Experimental results show that the multiple search strategy can effectively improve the diversity of the imperialist competitive algorithm and improve the quality and stability of solutions.
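A small Python sketch of the diversity-injection step for the TSP: when the best tour stalls, the worst solutions are replaced with fresh uniformly random permutations. The replacement fraction and the sorted-population convention are illustrative choices.

```python
import random

def inject_diversity(population, tour_len, frac=0.2):
    # population: list of (tour, cost) pairs, sorted best-first
    k = max(1, int(frac * len(population)))
    for i in range(len(population) - k, len(population)):
        tour = random.sample(range(tour_len), tour_len)   # uniform random tour
        population[i] = (tour, float("inf"))              # cost recomputed later
    return population
```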
Efficient block-based sampling algorithm for aggregation query processing on duplicate charged records
PAN Mingyu, ZHANG Lu, LONG Guobiao, LI Xianglong, MA Dongxue, XU Liang
Journal of Computer Applications    2018, 38 (6): 1596-1600.   DOI: 10.11772/j.issn.1001-9081.2017112632
Existing query analysis methods usually treat entity resolution as an offline preprocessing step that cleans the whole dataset. However, as data sizes keep growing, such an offline cleaning mode with high computational complexity has become unable to meet the real-time analysis needs of most applications. To solve the problem of aggregation queries over duplicate charged records, a new method integrating entity resolution with approximate aggregation query processing was proposed. Firstly, a block-based sampling strategy was adopted to collect samples. Then, an entity-resolution method was used to identify the duplicate entities within the sampled data. Finally, an unbiased estimate of the aggregation result was reconstructed from the entity-resolution results. The proposed method avoids the time cost of resolving all entities and returns query results that satisfy user needs while resolving only a small amount of sample data. Experimental results on both real and synthetic datasets demonstrate the efficiency and reliability of the proposed method.
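A minimal Python sketch of the idea: sample whole blocks, resolve duplicates only inside the sample, then scale up for an approximately unbiased SUM estimate. `same_entity` is a stand-in for the real entity-resolution predicate, and the record fields are hypothetical.

```python
import random

def same_entity(a, b):                 # illustrative matcher (assumption)
    return a["name"].lower() == b["name"].lower()

def approx_sum(blocks, n_sample, key=lambda r: r["amount"]):
    sample = random.sample(blocks, n_sample)                 # uniform block sample
    seen, total = [], 0.0
    for block in sample:
        for rec in block:
            if not any(same_entity(rec, s) for s in seen):   # dedupe in-sample only
                seen.append(rec)
                total += key(rec)
    return total * len(blocks) / n_sample                    # scale to all blocks
```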
Time-frequency combination de-noising algorithm based on orthogonal frequency division multiplexing/offset quadrature amplitude modulation in power line communication system
ZHENG Jianhong, ZHANG Heng, LI Fei, LI Xiang, DENG Zhan
Journal of Computer Applications    2018, 38 (1): 228-232.   DOI: 10.11772/j.issn.1001-9081.2017071727
Focusing on the issue that impulse noise in Power Line Communication (PLC) systems greatly affects transmission performance and that most traditional de-noising algorithms cannot suppress it effectively, a time-frequency combined de-noising algorithm was proposed. Firstly, impulse noise with large peaks in the time-domain received signal was detected and zeroed using an appropriately chosen threshold. Secondly, based on the symbols already decided in the frequency domain, the smaller impulse noise not eliminated in the time domain was reconstructed, with the accuracy of the reconstruction improved by iteration. Finally, the reconstructed impulse noise was subtracted from the frequency-domain received signal. Simulation experiments were conducted over a power line multipath channel. Compared with traditional time-domain and frequency-domain de-noising algorithms, the proposed algorithm achieved performance improvements of 2 dB and 0.5 dB respectively at a bit-error rate of 0.01, with the gap widening as the bit-error rate decreases. The simulation results show that the proposed time-frequency combined de-noising algorithm can improve the resistance of PLC systems to impulse noise.
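A numpy sketch of the time-domain stage only: detect samples whose magnitude exceeds a threshold and zero ("blank") them before demodulation. The threshold rule (a multiple of the median magnitude) is an illustrative choice, not the paper's.

```python
import numpy as np

def blank_impulses(rx, k=4.0):
    thresh = k * np.median(np.abs(rx))
    out = rx.copy()
    out[np.abs(rx) > thresh] = 0           # null the large peaks
    return out

rx = np.random.randn(1024) + 1j * np.random.randn(1024)   # toy received signal
rx[100] += 40                              # one strong impulse
clean = blank_impulses(rx)
```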
Scale adaptive improvement of kernel correlation filter tracking algorithm
QIAN Tanghui, LUO Zhiqing, LI Guojia, LI Yingyun, LI Xiankai
Journal of Computer Applications    2017, 37 (3): 811-816.   DOI: 10.11772/j.issn.1001-9081.2017.03.811
To solve the problem that the Circulant Structure of tracking-by-detection with Kernels (CSK) tracker has difficulty adapting to target scale changes, a multi-scale kernel correlation filter classifier was proposed to realize scale-adaptive target tracking. Firstly, multi-scale images were used to construct the sample set, and the multi-scale kernel correlation filter classifier was trained on it to estimate the target size and detect the optimal scale. Then, samples collected at the optimal target scale were used to update the classifier online, achieving scale-adaptive target tracking. Comparative experiments and analysis illustrate that the proposed algorithm can adapt to scale changes of the target during tracking, with the eccentricity error reduced to 1/5 to 1/3 of that of the CSK algorithm, meeting the needs of long-term tracking in complex scenes.
Group consensus of heterogeneous multi-Agent systems with time delay
LI Xiangjun, LIU Chenglin, LIU Fei
Journal of Computer Applications    2016, 36 (5): 1439-1444.   DOI: 10.11772/j.issn.1001-9081.2016.05.1439
Concerning the stationary group consensus problem for heterogeneous multi-Agent systems composed of first-order and second-order Agents, two stationary group consensus protocols were proposed, under a fixed interconnection topology and under switching interconnection topologies respectively. By constructing Lyapunov-Krasovskii functionals, sufficient conditions formulated as linear matrix inequalities were obtained for the system to converge asymptotically to group consensus under the group consensus algorithm with identical time-varying communication delay. Finally, simulation results show that the heterogeneous multi-Agent systems with time delay converge asymptotically to group consensus under certain conditions.
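A hedged Euler simulation of the setting: first-order and second-order agents running a delayed consensus protocol and settling group-wise. The two-group topology, gains, delay, and protocol form are illustrative, not the paper's conditions.

```python
import numpy as np

n, dt, tau_steps = 4, 0.01, 20                 # agents 0,1: group A; 2,3: group B
A = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
              [0, 0, 0, 1], [0, 0, 1, 0]], float)   # block-diagonal topology
x, v = np.random.rand(n), np.zeros(n)
hist = [x.copy()]
for _ in range(2000):
    xd = hist[max(0, len(hist) - tau_steps)]   # neighbor states tau steps ago
    u = np.array([sum(A[i, j] * (xd[j] - x[i]) for j in range(n))
                  for i in range(n)])
    x[:2] += dt * u[:2]                        # first-order agents
    v[2:] += dt * (u[2:] - 2 * v[2:])          # second-order agents (damped)
    x[2:] += dt * v[2:]
    hist.append(x.copy())
print(x)                                       # each group settles to its own value
```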
Simulated annealing algorithm for solving the two-species small phylogeny problem
WU Jingli, LI Xiancheng
Journal of Computer Applications    2016, 36 (4): 1027-1032.   DOI: 10.11772/j.issn.1001-9081.2016.04.1027
In order to solve the two-species Small Phylogeny Problem (SPP) in the duplication-loss model, a simulated annealing algorithm named SA2SP was devised for the duplication-loss alignment problem. An alignment algorithm was introduced to construct the initial solution; a labeling algorithm was used to construct the objective function and obtain the evolution cost; and three intelligent neighborhood functions exploiting the evolutionary characteristics of gene sequences were introduced to generate neighborhood solutions. The ribosomal RNA (rRNA) and transfer RNA (tRNA) of four real bacteria were used to test the performance of SA2SP against the Pseudo-Boolean Linear Programming (PBLP) algorithm. The experimental results show that SA2SP achieves a smaller evolution cost and is an effective method for solving the two-species SPP in the duplication-loss model.
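A generic simulated-annealing skeleton matching the description: an initial solution, a cost function (standing in for the labeling-based evolution cost), and a set of neighborhood moves (standing in for the three problem-specific functions). Cooling parameters are illustrative.

```python
import math
import random

def anneal(init, cost, neighbors, T=1.0, alpha=0.995, T_min=1e-4):
    # neighbors: list of functions, each mapping a solution to a nearby one
    cur, cur_cost = init, cost(init)
    best, best_cost = cur, cur_cost
    while T > T_min:
        cand = random.choice(neighbors)(cur)      # pick one move type at random
        c = cost(cand)
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / T):
            cur, cur_cost = cand, c               # accept (always if better)
            if c < best_cost:
                best, best_cost = cand, c
        T *= alpha                                # geometric cooling
    return best, best_cost
```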
Optimized AODV routing protocol to avoid route breaks
LI Xiangli, JING Ruixia, HE Yihan
Journal of Computer Applications    2014, 34 (9): 2468-2471.   DOI: 10.11772/j.issn.1001-9081.2014.09.2468
In a Mobile Ad Hoc Network (MANET), node movement is liable to cause link failures, while the local repair in the classic Ad Hoc On-demand Distance Vector (AODV) routing algorithm is performed only after a link breaks, which has limitations and may cause cached data packets to be lost when the repair process fails or proceeds too slowly. To solve this problem, an optimized AODV routing algorithm named ARB-AODV, which avoids route breaks, was proposed. In ARB-AODV, links that appear about to break are predicted and the stability degrees of a node's neighbors are calculated; the node with the highest stability is then added to the weak link to eliminate the edge effect of nodes and avoid route breaks. Experiments were conducted on the NS-2 platform using the Random Waypoint Mobility Model (RWM) and Constant Bit Rate (CBR) traffic. When nodes moved at speeds above 10 m/s, the packet delivery ratio of ARB-AODV stayed at 80% or higher, the average end-to-end delay declined by up to 40%, and the normalized routing overhead declined by up to 15% compared with AODV. The simulation results show that ARB-AODV outperforms AODV and can effectively improve network performance.
PM2.5 concentration prediction model of least squares support vector machine based on feature vector
LI Long, MA Lei, HE Jianfeng, SHAO Dangguo, YI Sanli, XIANG Yan, LIU Lifang
Journal of Computer Applications    2014, 34 (8): 2212-2216.   DOI: 10.11772/j.issn.1001-9081.2014.08.2212
To address the problem of Fine Particulate Matter (PM2.5) concentration prediction, a PM2.5 concentration prediction model was proposed. First, a comprehensive meteorological index was introduced to jointly account for wind, humidity and temperature; then a feature vector was constructed by combining it with the measured concentrations of SO2, NO2, CO and PM10; finally, a Least Squares Support Vector Machine (LS-SVM) prediction model was built on the feature vectors and PM2.5 concentration data. Experimental results on 2013 data from the environmental monitoring centers of city A and city B show that forecast accuracy improves after introducing the comprehensive meteorological index, with error reduced by nearly 30%. The proposed model predicts PM2.5 concentration more accurately and generalizes well. Furthermore, the authors analyzed the relationship between PM2.5 concentration and the hospitalization rate and hospital outpatient volume, and found a high correlation between them.
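A compact numpy sketch of LS-SVM regression with an RBF kernel: training reduces to solving one linear system. The feature construction (comprehensive weather index plus SO2/NO2/CO/PM10) is represented by a generic feature matrix; kernel width and regularization are assumptions.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def lssvm_fit(X, y, C=10.0, gamma=0.5):
    n = len(y)
    K = rbf(X, X, gamma) + np.eye(n) / C       # kernel matrix + ridge term
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K]])
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(Xtr, alpha, b, Xte, gamma=0.5):
    return rbf(Xte, Xtr, gamma) @ alpha + b

X, y = np.random.rand(50, 6), np.random.rand(50)   # 6 stand-in features
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, X[:3]))
```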
News topic mining method based on weighted latent Dirichlet allocation model
LI Xiangdong, BA Zhichao, HUANG Li
Journal of Computer Applications    2014, 34 (5): 1354-1359.   DOI: 10.11772/j.issn.1001-9081.2014.05.1354
To address the low accuracy and poor interpretability of traditional news topic mining, a new method based on weighted Latent Dirichlet Allocation (LDA) that exploits the information structure characteristics of news was proposed. Firstly, vocabulary weights were improved from different angles and composite weights were built; more expressive words were obtained by extending the feature-item generation process of the LDA model. Secondly, the Category Distinguish Word (CDW) method was used to optimize the word order of the generated results, reducing noise and topic ambiguity and improving the interpretability of the topics. Finally, according to the mathematical characteristics of the topic probability distribution model, topics were quantified in terms of the documents' contribution to them and their weight probabilities in order to obtain the hot topics. Simulation results show that, compared with the traditional LDA model, the weighted LDA model lowers the false negative rate and false positive rate by an average of 1.43% and 0.16% respectively, and the minimum normalized cost by an average of 2.68%, confirming the feasibility and effectiveness of the method.
Built-in determined sub-key correlation power analysis
LI Jinliang, YU Yu, FU Rong, LI Xiangxue
Journal of Computer Applications    2014, 34 (5): 1283-1287.   DOI: 10.11772/j.issn.1001-9081.2014.05.1283
To study the Built-in determined Sub-key Correlation Power Analysis (BS-CPA) proposed by Yuichi Komano et al. (KOMANO Y, SHIMIZU H, KAWAMURA S. BS-CPA: built-in determined sub-key correlation power analysis. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2010, E93-A(9): 1632-1638.), this paper compared the efficiency of Differential Power Analysis (DPA), Correlation Power Analysis (CPA) and BS-CPA on the dpacontest.org data set in terms of the number of power consumption traces and the success rate. The results show that although BS-CPA works well in theory, it falls far short of the efficiency claimed by its authors. The intermediate value was then chosen according to the relationship between the register state of the executing cryptographic device and its power consumption; the attack surface was narrowed by reducing noise and ghost peaks, and the most relevant sample points were filtered out. Compared with the whole-point attack, the partial-point attack increases the maximum success rate by 60% when cracking the 64-bit key with the same number of traces. The experimental results prove that the improved model increases efficiency and decreases the number of power consumption traces needed for the same success rate, and that the result is stable.
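A minimal numpy sketch of single-byte CPA: correlate hypothetical leakage (the Hamming weight of an S-box output) with measured traces and rank key guesses by peak correlation. The identity `SBOX` placeholder, traces, and plaintexts are stand-ins, not DES/AES specifics.

```python
import numpy as np

SBOX = np.arange(256, dtype=np.uint8)           # placeholder S-box (assumption)
HW = np.array([bin(v).count("1") for v in range(256)])

def cpa_rank(traces, plaintexts):
    # traces: (n_traces, n_points); plaintexts: (n_traces,) one byte each
    scores = np.zeros(256)
    t = (traces - traces.mean(0)) / traces.std(0)
    for guess in range(256):
        hyp = HW[SBOX[plaintexts ^ guess]]      # hypothetical leakage per trace
        h = (hyp - hyp.mean()) / hyp.std()
        scores[guess] = np.abs(h @ t / len(h)).max()   # peak |Pearson r|
    return np.argsort(scores)[::-1]             # best key-byte guesses first

traces = np.random.randn(500, 100)              # toy traces
pts = np.random.randint(0, 256, 500)            # toy plaintext bytes
print(cpa_rank(traces, pts)[:5])
```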
Tax forecasting based on Adaboost algorithm and BP neural network
LI Xiang, ZHU Quan-yin
Journal of Computer Applications    2012, 32 (12): 3558-3560.   DOI: 10.3724/SP.J.1087.2012.03558
In view of the low accuracy of traditional tax forecasting models, the authors put forward a method combining the Adaboost algorithm with a BP neural network to forecast tax revenue. Firstly, the method preprocessed the historical tax data and initialized the distribution weights of the test data; secondly, it initialized the weights and thresholds of the BP neural network and used the BP neural network as a weak predictor, training it repeatedly on the tax data and adjusting the weights; finally, it combined multiple weak BP neural network predictors into a strong predictor with the Adaboost algorithm and produced the forecast. Simulation experiments were carried out on China's tax data from 1990 to 2010. The results show that this method reduces the relative mean error from 0.50% to 0.18% compared with a traditional BP network, effectively mitigates the effect of a single BP network getting trapped in local minima, and improves the prediction accuracy of the network.
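A hedged scikit-learn sketch of the scheme: AdaBoost (the R2 variant, which resamples by weight) over small BP-style neural networks as weak predictors. The synthetic series, network size, and estimator counts are illustrative; the parameter is named `base_estimator` in scikit-learn versions before 1.2.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neural_network import MLPRegressor

X = np.arange(21, dtype=float).reshape(-1, 1)       # years as a toy feature
y = 1000 * 1.15 ** X.ravel()                        # synthetic growing revenue

weak = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model = AdaBoostRegressor(estimator=weak, n_estimators=10, random_state=0)
model.fit(X, y)
print(model.predict([[21.0]]))                      # next-period forecast
```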
Distributed computation method for traffic noise mapping based on service object-oriented architecture
LI Nan, FENG Tao, LIU Bin, LI Xian-hui, LIU Lei
Journal of Computer Applications    2012, 32 (08): 2146-2149.   DOI: 10.3724/SP.J.1087.2012.02146
Current urban traffic noise mapping systems are not well suited to distributed computing for large-scale projects in dynamic networks. This paper proposed a noise mapping distributed computation method based on loosely coupled services and the mechanism of Service Object Oriented Architecture (SOOA), investigated the generation of the noise propagation calculation service, and introduced the deployment and management of services in the proposed system. A demonstration indicated that the distributed computation approach considerably reduces calculation overhead while providing a flexible system architecture. The experimental results show that imbalance among parallel subtasks affects parallel efficiency; under normal circumstances, parallel efficiency can reach over 85%.
Optimized scheme about FMIPv6 with π-calculus verification
LI Xiang-li, WANG Xiao-yan, WANG Zheng-bin, QU Zhi-wei
Journal of Computer Applications    2012, 32 (08): 2095-2102.   DOI: 10.3724/SP.J.1087.2012.02095
In order to solve the problems of long handover delay and high packet loss rate in FMIPv6, an improved scheme named PI-FMIPv6 was designed. Information learning, proxy binding and a tunnel timer were introduced to complete the configuration of the New Care-of Address (NCoA), Duplicate Address Detection (DAD) and Binding Update (BU) in advance and to manage the tunnel. The π-calculus was used to define and verify a mathematical model of PI-FMIPv6, proving that the optimized scheme is well-formed and precise. Furthermore, simulation results from NS-2 show that PI-FMIPv6 reduces the handover delay by at least 60.7% and the packet loss rate by at least 61.5% compared with FMIPv6, verifying that PI-FMIPv6 is superior to FMIPv6 and better meets real-time requirements.
Optimization of macro-handover in hierarchical mobile IPv6
LI Xiangli, SUN Xiaolin, GAO Yanhong, WANG Weifeng, LIU Dawei
Journal of Computer Applications    2011, 31 (06): 1469-1471.   DOI: 10.3724/SP.J.1087.2011.01469
Macro handover causes high packet loss and long handover latency in the Hierarchical Mobile IPv6 (HMIPv6) protocol. To solve these problems, this paper proposed a protocol named Tunnel-Based Fast Macro-Handover (TBFMH), which introduces a tunnel mechanism, acquires care-of addresses based on handover information, performs duplicate address detection in advance, and completes the local binding update while building the tunnel. Simulation results show that TBFMH decreases the handover latency by at least 50% and reduces the packet loss rate compared with HMIPv6, effectively improving macro handover performance.
Improved FP-Growth algorithm and its applications in the business association
ZHAO Xiao-Min, HE Song-Hua, LI Xian-Peng, YIN Bo
Journal of Computer Applications   
The FP-Growth algorithm, based on the FP-Tree, needs to create a large number of conditional FP-Trees recursively while mining frequent patterns. This is inefficient and ill-suited to cross-selling of mobile communication services, where association rule mining is business-constrained. Therefore, an item-constrained frequent pattern tree (ICFP-Tree) and a new ICFP-Mine algorithm that mines the tree directly were proposed. Theoretical analysis and experimental results show that the ICFP-Mine algorithm is superior to the FP-Growth algorithm in memory occupancy and time cost, and it has achieved good results in cross-selling applications for mobile communication services.
Study on improving the convergence of genetic neural networks
LI Xiang-mei, ZHAO Tian-yun
Journal of Computer Applications    2005, 25 (12): 2789-2791.  
To address the respective advantages and shortcomings of the gradient descent algorithm and the genetic algorithm for training the connection weights of neural networks, a new algorithm combining the two, referred to as the Hybrid Intelligence (HI) learning algorithm, was proposed. Applied to optimizing the connection weights of feedforward neural networks, the algorithm proved feasible. The design and realization of HI were introduced, and it was shown in theory and in practice that the hybrid intelligence learning algorithm is better, faster and more accurate than either the gradient descent algorithm or the genetic algorithm alone.
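A toy numpy sketch of the hybrid idea: a simplified genetic algorithm (truncation selection plus mutation, with elitism) coarsely searches the weight space of a tiny one-hidden-layer network, then gradient descent (numeric here, for brevity) refines the best individual. Network size, rates, and generation counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)             # toy target function

def mse(w):
    W1, b1, W2 = w[:8].reshape(2, 4), w[8:12], w[12:16]
    h = np.tanh(X @ W1 + b1)
    return ((h @ W2 - y) ** 2).mean()

# GA phase: evolve flat 16-dim weight vectors
pop = rng.normal(size=(30, 16))
for _ in range(100):
    fit = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fit)[:10]]               # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, 16))
    children[0] = parents[0]                          # keep the elite unmutated
    pop = children

# gradient-descent phase: refine the best individual
w = pop[0].copy()
for _ in range(200):
    g = np.array([(mse(w + 1e-5 * e) - mse(w - 1e-5 * e)) / 2e-5
                  for e in np.eye(16)])               # numeric gradient
    w -= 0.2 * g
print(mse(w))
```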
Design and implementation of data consistency of remote data replicating system
LING Zong-hu, LI Xian-guo, HAN Zhi-yong
Journal of Computer Applications    2005, 25 (11): 2638-2640.  
A method was presented to keep data consistency and data-view consistency, and to optimize system performance, in a remote disaster-tolerant data replication system. It used a log volume to record user write requests, maintained data consistency by preserving the block update order between primary and backup volumes, and adopted different transport methods for different data types. The architecture, implementation method and key techniques of the method were discussed.
Digital watermarking algorithm based on phase and amplitude of DFT domain
CAO Rong, WANG Ying, LI Xiang-lin
Journal of Computer Applications    2005, 25 (11): 2536-2537.  
A meaningful watermark was embedded separately into the phase and amplitude components of the block DFT of an image, and their robustness was compared under the same distortion. Combining the phase and amplitude components, a novel DFT-domain watermarking algorithm was proposed in which the watermark can be detected without the original image. Experimental results show that the algorithm is easy to realize, the watermark is invisible, and the algorithm is robust to JPEG compression, resizing, rotation, median filtering and noise perturbation.
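A small numpy sketch of the amplitude-side embedding only: per block, nudge the magnitude of one mid-frequency DFT coefficient up or down by one watermark bit while keeping the phase. Block size, coefficient position, and strength are assumptions; a full scheme would also adjust the conjugate-symmetric partner.

```python
import numpy as np

def embed_bit(block, bit, pos=(2, 3), alpha=2.0):
    F = np.fft.fft2(block)
    mag, ph = np.abs(F), np.angle(F)
    mag[pos] += alpha if bit else -alpha       # amplitude modulation
    mag[pos] = max(mag[pos], 0.0)              # keep magnitude non-negative
    F2 = mag * np.exp(1j * ph)
    # np.real drops the tiny imaginary residue this simplification leaves
    return np.real(np.fft.ifft2(F2))

img_block = np.random.rand(8, 8) * 255         # stand-in 8x8 image block
wm_block = embed_bit(img_block, bit=1)
```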
Design and formal description of security policy coordination for heterogeneous information systems
HAN Zhi-yong, WANG Ping, NI Yong, LI Xian-guo
Journal of Computer Applications    2005, 25 (07): 1565-1567.   DOI: 10.3724/SP.J.1087.2005.01565
In heterogeneous information systems, various access control techniques and policies are adopted to protect resources, so security policies must be coordinated between interconnected enterprises for information to be shared effectively. A primitive ticket-based authorization model was proposed to manage disparate policies in information enclaves. The formal description and the computation of privileges were also given.
Promela modeling and analysis for security protocol
LONG Shi-gong, WANG Qiao-li, LI Xiang
Journal of Computer Applications    2005, 25 (07): 1548-1550.   DOI: 10.3724/SP.J.1087.2005.01548
The standard model checking technology for analyzing security protocols was introduced. As an example, a model of the Needham-Schroeder Public-Key protocol was constructed in the Promela language, and SPIN was used to check it and discover an attack upon the protocol. The method is easy to extend to security protocols involving several agents.
Symbolic model checking analysis for cryptographic protocol
LONG Shi-gong, LUO Wen-jun, LI Xiang
Journal of Computer Applications    2005, 25 (01): 138-140.   DOI: 10.3724/SP.J.1087.2005.0138
A method was given for analyzing cryptographic protocols with a model checker. A concrete example was worked through using the SMV toolkit. Results show that the symbolic model checking method can discover replay attacks upon some cryptographic protocols and is effective.